10th World Congress in Probability and Statistics
Contributed Session (live Q&A at Track 1, 11:30AM KST)
Contributed 25
Time Series Analysis II
Conference: 11:30 AM – 12:00 PM KST
Local: Jul 19 Mon, 9:30 PM – 10:00 PM CDT
Robust Bayesian analysis of multivariate time series
Yixuan Liu (The University of Auckland)
Nonparametric Bayesian inference for multivariate time series has surged in the literature over the last decade; many approaches model the spectral density matrix using the Whittle likelihood, an approximation of the true likelihood commonly employed for Gaussian time series. Meier et al. (2019) propose a nonparametric Whittle likelihood procedure with a Bernstein polynomial prior weighted by a Hermitian positive definite (Hpd) Gamma process. However, nonparametric techniques are known to be less efficient and less powerful than parametric techniques when the latter specify the model for the observations perfectly. Kirch et al. (2019) therefore suggest, in the univariate case, a nonparametric correction to the parametric likelihood that retains the efficiency of parametric models while amending their sensitivities through the nonparametric correction. Alongside this novel likelihood, a Bernstein polynomial prior equipped with a Dirichlet process weight is employed. My current work extends the corrected Whittle likelihood procedure to the multivariate case by combining the work of Meier et al. (2019) and Kirch et al. (2019). Precisely, a multivariate version of the corrected Whittle likelihood is proposed, together with the Hpd Gamma process weighted Bernstein polynomial prior, to implement Bayesian inference. A key part of this work is proving posterior consistency. In the talk, I will review the work of Meier et al. (2019) and Kirch et al. (2019), and then introduce the multivariate corrected Whittle likelihood procedure.
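The (univariate) Whittle likelihood underlying these approaches can be sketched as follows: the periodogram at the Fourier frequencies is combined with a candidate spectral density. This is a minimal illustration only; `spec_fun` is a hypothetical callable, and the corrected and multivariate likelihoods of the talk are not reproduced here.

```python
import numpy as np

def whittle_loglik(x, spec_fun):
    """Univariate Whittle log-likelihood of the series x under a
    candidate spectral density spec_fun (a hypothetical callable
    evaluated at the Fourier frequencies).  A minimal sketch."""
    n = len(x)
    dft = np.fft.fft(x)
    # Fourier frequencies lambda_j = 2*pi*j/n for j = 1, ..., floor((n-1)/2)
    j = np.arange(1, (n - 1) // 2 + 1)
    lam = 2 * np.pi * j / n
    # Periodogram I(lambda_j) = |d(lambda_j)|^2 / (2*pi*n)
    I = np.abs(dft[j]) ** 2 / (2 * np.pi * n)
    f = spec_fun(lam)
    # Whittle approximation: -sum_j [ log f(lambda_j) + I(lambda_j) / f(lambda_j) ]
    return -np.sum(np.log(f) + I / f)
```

For Gaussian white noise with unit variance, the true spectral density is the constant 1/(2*pi), and the Whittle log-likelihood under it should dominate that under a badly misspecified density.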
Posterior consistency for the spectral density of non-Gaussian stationary time series
Yifu Tang (The University of Auckland)
Various nonparametric approaches to Bayesian spectral density estimation of stationary time series have been suggested in the literature, mostly based on the Whittle likelihood approximation. A generalization of this approximation was proposed by Kirch et al. (2019), who prove posterior consistency for spectral density estimation in combination with the Bernstein-Dirichlet process prior for Gaussian time series. In this talk, I will discuss how to extend the posterior consistency result to non-Gaussian time series by employing a modified version of the general consistency theorem of Shalizi (2009) for dependent data and misspecified models. As a special case, posterior consistency for the spectral density under the Whittle likelihood, as proposed by Choudhuri, Ghosal and Roy (2004), is also extended to non-Gaussian time series.
ARMA models for zero inflated count time series
Vurukonda Sathish (Indian Institute of Technology Bombay)
Zero inflation is a common nuisance while monitoring disease progression over time. This article proposes a new observation-driven model for zero inflated and over-dispersed count time series. The counts, given the past history of the process and available information on covariates, are assumed to be distributed as a mixture of a Poisson distribution and a distribution degenerate at zero, with a time-dependent mixing probability, $\pi_t$. Since count data usually suffer from overdispersion, a Gamma distribution is used to model the excess variation, resulting in a zero inflated negative binomial (NB) regression model with mean parameter $\lambda_t$. Linear predictors with autoregressive and moving average (ARMA) type terms, covariates, seasonality, and trend are fitted to $\lambda_t$ and $\pi_t$ through canonical link generalized linear models. Estimation is done using maximum likelihood aided by iterative algorithms, such as Newton-Raphson (NR) and Expectation-Maximization (EM). Theoretical results on the consistency and asymptotic normality of the estimators are given. The proposed model is illustrated using in-depth simulation studies and a dengue data set.
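The zero-inflated NB observation density at the heart of such a model can be sketched as follows. The parameter names `lam` ($\lambda_t$), `pi` ($\pi_t$), and the dispersion `r` are illustrative, and the ARMA-type linear predictors linking them to the past of the process are omitted; this is a sketch of the mixture pmf, not the authors' estimation procedure.

```python
import math

def zinb_logpmf(y, lam, pi, r):
    """Log-pmf of a zero-inflated negative binomial count y with
    NB mean lam, zero-inflation probability pi, and dispersion r.
    Illustrative parameterization; in the full model lam and pi
    would vary with time via ARMA-type linear predictors."""
    # NB pmf in mean parameterization:
    # P(Y = y) = Gamma(y + r) / (Gamma(r) y!) * (r/(r+lam))^r * (lam/(r+lam))^y
    nb = (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
          + r * math.log(r / (r + lam)) + y * math.log(lam / (r + lam)))
    if y == 0:
        # Zeros arise either from the degenerate component or from the NB
        return math.log(pi + (1 - pi) * math.exp(nb))
    return math.log(1 - pi) + nb
```

As a sanity check, the pmf sums to one over the support, and the zero-inflation term makes zero counts strictly more likely than under the plain NB.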
Time-series data clustering via thick pen transformation
Minji Kim (Seoul National University)
Our ultimate goal is to cluster time-series data by proposing a new similarity measure and an optimization algorithm. Specifically, we propose a new time-series clustering method based on the Thick Pen Transformation (TPT) of Fryzlewicz and Oh (2011), whose basic idea is to draw along the data with a pen of a given thickness. The main contribution of this research is a new similarity measure for time-series data based on the overlap, or gap, between the two thick lines after transformation. Applying TPT to measure association exploits the strengths of the transformation: it is a multi-scale visualization technique that provides information on the temporal trends of neighboring values. Moreover, we suggest an efficient iterative clustering optimization algorithm suited to the proposed measure. Our main motivation is to cluster a large number of physical step count series obtained from a wearable device. In addition, comparative numerical experiments are performed against several existing methods. Real data analysis and simulation studies suggest that the proposed method applies generally to time series distributed on the same side of the axis, whose similarities are measurable as a proportion of overlapping areas.
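The overlap idea can be sketched as follows. This is a simple moving-window variant of the thick-pen boundaries and an illustrative overlap ratio; the exact boundary definitions in Fryzlewicz and Oh (2011) and the similarity measure of the talk differ in detail.

```python
import numpy as np

def thick_pen(x, tau):
    """Upper and lower boundaries obtained by drawing along the series x
    with a pen of thickness tau (simple moving-window sketch; the original
    TPT of Fryzlewicz and Oh (2011) differs in detail)."""
    n = len(x)
    U = np.array([x[max(0, t - tau):t + tau + 1].max() + tau for t in range(n)])
    L = np.array([x[max(0, t - tau):t + tau + 1].min() - tau for t in range(n)])
    return L, U

def overlap_similarity(x, y, tau):
    """Average proportion of overlap between the two thick lines,
    in the spirit of the proposed measure (illustrative formula)."""
    Lx, Ux = thick_pen(np.asarray(x, float), tau)
    Ly, Uy = thick_pen(np.asarray(y, float), tau)
    # Pointwise intersection and union of the vertical bands [L, U]
    inter = np.clip(np.minimum(Ux, Uy) - np.maximum(Lx, Ly), 0.0, None)
    union = np.maximum(Ux, Uy) - np.minimum(Lx, Ly)
    return float(np.mean(inter / union))
```

Identical series give similarity 1, while series whose thick lines never meet give similarity 0, so the measure behaves like a proportion of overlapping areas.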
Q&A for Contributed Session 25
This talk does not have an abstract.
Session Chair
Joungyoun Kim (Yonsei University)